Accountability and Transparency
Fostering Trust and Quantifying Value of AI and ML
Artificial Intelligence (AI) and Machine Learning (ML) providers have a responsibility to develop valid and reliable systems. Much has been discussed about trusting AI and ML inferences (the process of running live data through a trained AI model to make a prediction or solve a task), but little has been done to define what that means. Those in the space of ML-based products are familiar with topics such as transparency, explainability, safety, and bias, yet there are no frameworks to quantify and measure them. Producing ever more trustworthy machine learning inferences is a path to increasing the value of products (i.e., increased trust in the results) and to engaging users in conversations that yield feedback for improving products. In this paper, we begin by examining the dynamic of trust between a provider (Trustor) and users (Trustees). Trustors are required to be trusting and trustworthy, whereas Trustees need be neither trusting nor trustworthy. The challenge for Trustors is to provide results good enough to raise a Trustee's level of trust above the minimum threshold for (1) doing business together and (2) continuing the service. We conclude by defining and proposing a framework, and a set of viable metrics, for computing a trust score and objectively understanding how trustworthy a machine learning system can claim to be, as well as how that score behaves over time.
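The abstract stops short of spelling out the metrics, so here is a minimal sketch of the threshold dynamic it describes -- the component metrics, weights, and threshold below are entirely hypothetical illustrations, not the paper's proposals:

```python
# A minimal sketch: component metrics, weights, and threshold are
# hypothetical illustrations, not the paper's proposed metrics.
def trust_score(metrics: dict, weights: dict) -> float:
    """Aggregate per-dimension scores in [0, 1] into one trust score."""
    total = sum(weights.values())
    return sum(metrics[k] * weights[k] for k in weights) / total

weights = {"transparency": 0.3, "safety": 0.4, "bias": 0.3}
releases = [
    {"transparency": 0.6, "safety": 0.7, "bias": 0.5},  # first release
    {"transparency": 0.8, "safety": 0.8, "bias": 0.7},  # after user feedback
]
THRESHOLD = 0.65  # hypothetical minimum for continued service

for i, m in enumerate(releases, 1):
    s = trust_score(m, weights)
    status = "above" if s >= THRESHOLD else "below"
    print(f"release {i}: trust={s:.2f} ({status} threshold)")
```

On this toy data the first release scores 0.61 (below the threshold) and the second 0.77 (above), illustrating how a provider might track trustworthiness over time.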
- Europe > Switzerland > Geneva > Geneva (0.04)
- Oceania > Australia > Queensland (0.04)
- North America > United States > New Jersey > Mercer County > Princeton (0.04)
- North America > United States > Colorado > Boulder County > Boulder (0.04)
Simple Steps to Success: Axiomatics of Distance-Based Algorithmic Recourse
Hamer, Jenny, Valladares, Jake, Viswanathan, Vignesh, Zick, Yair
We propose a novel data-driven framework for algorithmic recourse that offers users interventions to change their predicted outcome. Existing approaches to computing recourse find a set of points that satisfy some desiderata -- e.g., an intervention in the underlying causal graph, or minimizing a cost function. Satisfying these criteria, however, requires extensive knowledge of the underlying model structure, often an unrealistic amount of information in several domains. We propose a data-driven, computationally efficient approach to computing algorithmic recourse. We do so by suggesting directions in the data manifold that users can take to change their predicted outcome. We present Stepwise Explainable Paths (StEP), an axiomatically justified framework to compute direction-based algorithmic recourse. We offer a thorough empirical and theoretical investigation of StEP. StEP offers provable privacy and robustness guarantees, and outperforms the state-of-the-art on several established recourse desiderata.
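As a rough illustration of direction-based recourse in the data manifold -- a toy nearest-neighbour heuristic, not the StEP algorithm or its axioms -- one can suggest a step toward nearby training points that already receive the desired outcome:

```python
import numpy as np

def recourse_direction(x, X_train, y_train, desired=1, k=10):
    """Toy direction-based recourse: point x toward the mean of its k
    nearest training neighbours that already get the desired outcome."""
    positives = X_train[y_train == desired]
    dists = np.linalg.norm(positives - x, axis=1)
    nearest = positives[np.argsort(dists)[:k]]
    direction = nearest.mean(axis=0) - x
    return direction / np.linalg.norm(direction)  # unit-length step

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(int)      # toy binary outcome
x0 = np.array([-1.0, -1.0])              # currently on the negative side
print(recourse_direction(x0, X, y))      # suggested direction of change
```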
- Information Technology > Security & Privacy (1.00)
- Law (0.67)
The 12 Biggest AI Mistakes You Must Avoid
The benefits of AI are undeniable -- but so are the risks of getting it wrong. In this post, you'll learn the 12 biggest AI mistakes organizations make and get practical ways to avoid these common missteps so you can effectively harness the power of AI. AI is the most powerful technology humans have ever had access to -- and now, every organization can put it to good use and create value for customers. To fully realize the potential of AI, though, organizations must commit to its implementation and integration. It's crucial to invest in the right infrastructure, personnel, and training to ensure successful AI adoption and avoid half-hearted attempts that can lead to wasted resources and suboptimal results.
Why 2022 is only the beginning for AI regulation
As the world becomes increasingly dependent on technology to communicate, attend school, work, buy groceries, and more, artificial intelligence (AI) and machine learning (ML) play a bigger role in our lives. Living through the second year of the COVID-19 pandemic has shown the value of technology and AI. It has also revealed a dangerous side, and regulators have responded accordingly. In 2021, governing bodies across the world worked to regulate how AI and ML systems are used.
- North America > United States > New York (0.05)
- North America > United States > Colorado (0.05)
- North America > Canada (0.05)
- (4 more...)
- Law > Statutes (1.00)
- Information Technology (1.00)
- Health & Medicine (1.00)
- (2 more...)
Algorithmic Hiring Needs a Human Face
The way we apply for jobs has changed radically over the last 20 years, thanks to the arrival of sprawling online job-posting boards like LinkedIn, Indeed, and ZipRecruiter, and to hiring organizations' use of artificial intelligence (AI) algorithms to screen the tsunami of résumés that now gushes forth from such sites into human resources (HR) departments. With video-based online job interviews now harnessing AI to analyze candidates' use of language and their performance in gamified aptitude tests, recruitment is becoming a decidedly algorithmic affair. Yet all is not well in HR's brave new world. After quizzing 8,000 job applicants and 2,250 hiring managers in the U.S., Germany, and Great Britain, researchers at Harvard Business School, working with the consultancy Accenture, discovered that many tens of millions of people are being barred from consideration for employment by résumé screening algorithms that throw out applicants who do not meet an unfeasibly large number of requirements, many of which are utterly irrelevant to the advertised job. For instance, says Joe Fuller, the Harvard professor of management practice who led the algorithmic hiring research, nurses and graphic designers who need merely to use computers have been barred from progressing to job interviews for not having experience, or degrees, in computer programming.
- Europe > Germany (0.25)
- North America > United States > Minnesota > Hennepin County > Minneapolis (0.14)
- North America > United States > California > Los Angeles County > Santa Monica (0.04)
- (4 more...)
AI Regulation in Finance: Where Next?
In the last three years, financial regulators worldwide have been actively highlighting the need for responsible use of Artificial Intelligence/Machine Learning (AI/ML). What have they been saying? What common underlying concerns and regulatory themes are emerging? What can the industry expect in the coming years, and how can it start responding now? To date, no major financial regulator has introduced explicit regulations dedicated to the use of AI/ML.
- Asia > Singapore (0.17)
- North America > United States (0.07)
- North America > Canada (0.05)
- (3 more...)
- Banking & Finance (1.00)
- Information Technology > Security & Privacy (0.72)
- Law > Business Law (0.57)
- Government > Regional Government > Europe Government (0.49)
To what extent should we trust AI models when they extrapolate?
Yousefzadeh, Roozbeh, Cao, Xuenan
Many applications affecting human lives rely on models that have come to be known under the umbrella of machine learning and artificial intelligence. These AI models are usually complicated mathematical functions that map from an input space to an output space. Stakeholders are interested in knowing the rationales behind models' decisions and functional behavior. We study this functional behavior in relation to the data used to create the models. On this topic, scholars have often assumed that models do not extrapolate, i.e., they learn from their training samples and process new input by interpolation. This assumption is questionable: we show that models extrapolate frequently; the extent of extrapolation varies and can be socially consequential. We demonstrate that extrapolation happens on a substantial portion of datasets, more than one would consider reasonable. How can we trust models if we do not know whether they are extrapolating? Given a model trained to recommend clinical procedures for patients, can we trust the recommendation when the model considers a patient older or younger than all the samples in the training set? If the training set is mostly Whites, to what extent can we trust its recommendations about Black and Hispanic patients? Along which dimensions (race, gender, or age) does extrapolation happen? Even if a model is trained on people of all races, it may still extrapolate in significant ways related to race. The leading question is: to what extent can we trust AI models when they process inputs that fall outside their training set? This paper investigates several social applications of AI, showing how models extrapolate without notice. We also look at different sub-spaces of extrapolation for specific individuals subject to AI models and report how these extrapolations can be interpreted, not mathematically, but from a humanistic point of view.
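A common way to make "does this input extrapolate?" operational, in line with the paper's framing, is a convex-hull membership test: a query inside the convex hull of the training set is processed by interpolation, anything outside by extrapolation. A small linear-programming sketch of the general test (an illustration, not the authors' exact procedure):

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(x, X_train):
    """Query x is in the convex hull of the training rows iff there exist
    lambda >= 0 with sum(lambda) = 1 and X_train.T @ lambda = x, which is
    an LP feasibility problem (objective is constant zero)."""
    n = X_train.shape[0]
    A_eq = np.vstack([X_train.T, np.ones((1, n))])
    b_eq = np.concatenate([x, [1.0]])
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=(0, 1))
    return res.success

X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(in_convex_hull(np.array([0.2, 0.2]), X))  # True  -> interpolation
print(in_convex_hull(np.array([2.0, 2.0]), X))  # False -> extrapolation
```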
- North America > United States > New York (0.04)
- North America > United States > Connecticut > New Haven County > New Haven (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (3 more...)
Interpretable Machine Learning -- Fairness, Accountability, and Transparency in ML systems
Editor's note: Sayak is a speaker for ODSC West in San Francisco this November! Be sure to check out his talk, "Interpretable Machine Learning -- Fairness, Accountability and Transparency in ML systems," there! The problem is that it is much harder to evaluate machine learning systems than to train them. "Evaluating them responsibly requires doing more than just calculating loss metrics. Before putting a model into production, it's critical to audit training data and evaluate predictions for bias."
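As a tiny, hypothetical illustration of "more than just calculating loss metrics": slicing one aggregate accuracy number by subgroup can expose gaps the aggregate hides (the data below is made up):

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Evaluate beyond a single aggregate metric: accuracy per data slice."""
    return {g: float((y_pred[groups == g] == y_true[groups == g]).mean())
            for g in np.unique(groups)}

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 0])
groups = np.array(["x", "x", "x", "x", "y", "y", "y", "y"])
# Aggregate accuracy is 0.625, but the slices tell a different story:
print(per_group_accuracy(y_true, y_pred, groups))  # {'x': 1.0, 'y': 0.25}
```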
FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency
Sokol, Kacper, Santos-Rodriguez, Raul, Flach, Peter
Machine learning algorithms can take important decisions, sometimes legally binding, about our everyday life. In most cases, however, these systems and decisions are neither regulated nor certified. Given the potential harm that these algorithms can cause, qualities such as fairness, accountability and transparency of predictive systems are of paramount importance. Recent literature suggested voluntary self-reporting on these aspects of predictive systems -- e.g., data sheets for data sets -- but their scope is often limited to a single component of a machine learning pipeline, and producing them requires manual labour. To resolve this impasse and ensure high-quality, fair, transparent and reliable machine learning systems, we developed an open source toolbox that can inspect selected fairness, accountability and transparency aspects of these systems to automatically and objectively report them back to their engineers and users. We describe design, scope and usage examples of this Python toolbox in this paper. The toolbox provides functionality for inspecting fairness, accountability and transparency of all aspects of the machine learning process: data (and their features), models and predictions. It is available to the public under the BSD 3-Clause open source licence.
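For flavour, here is the kind of fairness check such a toolbox automates -- a hand-rolled demographic-parity comparison on made-up data, not the fatf API itself:

```python
import numpy as np

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-prediction rate between any two groups.
    A hand-rolled illustration of one check a toolbox like FAT Forensics
    automates; this is not FAT Forensics' actual API."""
    rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    vals = list(rates.values())
    return rates, max(vals) - min(vals)

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
rates, gap = demographic_parity_gap(y_pred, groups)
print(rates, f"gap={gap:.2f}")  # group a favoured by a 0.50 rate gap
```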
- Europe > United Kingdom (0.46)
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Information Technology > Security & Privacy (0.93)
- Energy > Oil & Gas > Upstream (0.54)